Non-Computable Functions in Optimality Theory

Author

  • Elliott Moreton
Abstract

Is Optimality Theory a constraining theory? A formal analysis shows that it is, if two auxiliary assumptions are made: (1) that only markedness and faithfulness constraints are allowed, and (2) that input and output representations are made from the same elements. Such OT grammars turn out to be incapable of computing circular or infinite chain shifts. These theoretical predictions are borne out by a wide range of natural phonological processes, including augmentation, alternations with zero, metathesis, and exchange rules. The results confirm, extend, and account for the observations of Anderson & Browne (1973) on exchange rules in phonology and morphology.

1. Introduction[1]

"Phonological theory", say Prince and Smolensky (1993, hereinafter P&S, p. 67), "contains two parts: a theory of substantive universals of phonological well-formedness, and a theory of formal universals of constraint interaction." The theory they espouse, Optimality Theory (hereinafter OT), claims to provide an exhaustive statement of how constraints interact, leaving the grammarian with just two tasks: to discover what the universal constraints are, and how they are ranked in particular languages. In this view, languages differ only in how they rank the universal constraint set. Predictions about what will turn up in natural languages follow from P&S's "factorial typology": one looks at the empirical consequences of all possible rankings of the universal constraint set. This naturally involves knowing what the constraints actually are.

There is another way to go about it, which this paper will illustrate. If we have reason to believe that every constraint (or at least every constraint relevant to a particular question) has such-and-such a property, it may follow that the constraint hierarchy as a whole is incapable of computing certain mappings from underlying to surface forms.
A local property of each individual constraint can entail a global property of the whole grammar, thus allowing us to make predictions without exact knowledge of the constraints. We will use this method to test a possibility that has haunted the OT literature for some time (Kirchner 1995), and which can be roughly stated as follows:

(1) There are only markedness and faithfulness constraints.

This will turn out to be an over-broad generalization. In order to prove our theorem we will be compelled to narrow it down to assert only that certain kinds of constraints must be of either the markedness or the faithfulness variety; again roughly speaking, those constraints which refer exclusively to segmental phonology or low-level prosodic features. As a necessary intermediate step, we will propose precise formal definitions of markedness and faithfulness. It will be seen to follow that phonological processes involving only those elements cannot create certain kinds of exchange-rule, metathesis, or augmentation effects, while processes involving other elements (such as morphology) can do so. This extends and explains an observation originally due to Anderson & Browne (1973) to the effect that exchange-rule effects are always morphologically triggered.

The paper is organized as follows: Section 2 translates OT and our auxiliary hypotheses into formal terms so that we can prove theorems about them. The actual proofs are done in Section 3. Section 4 draws real-world conclusions and tests them against the empirical data. A final summing-up and discussion are to be found in Section 5.

Footnote 1. This paper owes a lot to the efforts of people besides the author. I would like to thank Isadora Cohen, Shu-Chen Susan Huang, André Isaak, Jo-Wang Lin, Mauricio Mixco, Willem de Reuse, Lisa Selkirk, Michael Walsh Dickey, and most particularly John McCarthy for consultation and critique. A special debt is owed the resourceful people at the UMass Interlibrary Loan. The 1999 ROA archive version has been slightly edited from the original 1996 manuscript to remove minor typographical errors and snide comments. Correspondence is welcome; please address it to [email protected] or Department of Linguistics, University of Massachusetts, Amherst, MA 01003, USA.

2. Formal properties of Optimality Theory

In order to ask formally, "What can OT do?", we need precise formal definitions of "OT" and "do". We'll begin with the easier task, introducing the notion of computability by a grammar in Section 2.1 as a formalization of "do". Formalization of "OT" will proceed by stages, starting in Section 2.2 with a broader class of formal grammars called constraint-hierarchy grammars, which will turn out to be computationally uninteresting: for any given function of the relevant type, there exists a constraint-hierarchy grammar that computes it. Section 2.3 introduces the auxiliary assumptions which define a computationally much more interesting model, called here "classical OT", in which underlying and surface representations are constructed in the same fashion ("homogeneity"), there is always a fully faithful candidate ("inclusivity"), and every constraint is either a markedness constraint or a faithfulness constraint ("conservativity").

2.1. Phonological grammars

Phonological theories normally involve at least three levels of linguistic representation:

• The underlying (phonological) representation, assembled directly from information stored in the lexicon. This forms the input to the phonological component of the grammar. Underlying representations will be written between slashes: /kentapal/.

• The surface (phonological) representation, which is the output of the phonological component and input to the phonetic component, here written between square brackets: [kentapal].
• The phonetic representation, which is what the speaker actually does with their mouth, written here between double quotes: "kentapal".

Optimality Theory is a theory of constraint interaction, not of representations. We want our deductions about OT to hold even if the theory of representations changes. Hence, we have to treat input and output forms as unanalyzable atoms. The term grammar will refer to a function G from a countable set A ("inputs") to subsets of another countable set B ("outputs"). A grammar G such that the inputs and outputs are precisely the underlying and surface phonological representations, respectively, of a natural language, and such that G performs the same mapping as a speaker's phonological competence, will be called a natural grammar. In natural grammars, these subsets of B usually contain only one element: free phonological (as distinguished from phonetic) variation is less common, and will be ignored here. In order to simplify exposition, we shall incorrectly assume that all natural grammars have the property of exactness:

(2) Defn. Let A and B be two countable sets, and suppose G: A → 2^B is such that ∀ a ∈ A, |G(a)| = 1. Then G is said to be exact.

Confining our interest to the class of exact grammars, we are now able to formulate the question, "what does a grammar do?".

(3) Defn. Suppose A and B are countable sets, and G: A → 2^B is an exact grammar. We say that G computes the function f: A → B if ∀ a ∈ A, G(a) = {f(a)}.

Our question now becomes: for a given class of exact grammars from A → 2^B, what functions can be computed by a member of that class?

2.2. Constraint-hierarchy grammars and Optimality Theory

Let A, B be countable sets.

(4) Defn. A constraint over A × B is any function C: A × B → N such that the domain of C is all of A × B.
If a ∈ A, b ∈ B, we write C/a/[b] rather than the more usual C(a, b), as a reminder of which argument corresponds to the underlying representation and which to the surface representation. Then a is called the input to C, and b is called the candidate. The value of C/a/[b] is called the score awarded by C to the candidate b for the input a.[2]

(5) Defn. A constraint hierarchy over A × B is an ordered n-tuple C = (C1, ..., Cn), where each Ci is a constraint over A × B.[3]

(6) Defn. A score vector (of length n) is an ordered n-tuple v = (r1, ..., rn), where ri ∈ N.

(7) Defn. Let v = (r1, ..., rn), v' = (r'1, ..., r'n) be score vectors. We say v < v' iff ∃ k such that
(i) ∀ i < k: ri = r'i
(ii) rk < r'k

Footnote 2. Intuitively, a constraint is a measure of how bad a given input-candidate pair is in a particular respect. Here we are allowing the badness score to be no smaller than zero, and to be arbitrarily large. We have to justify both of these assumptions. In constructing a grammar to describe competence in a particular language, one might find it convenient, and linguistically "insightful", to hypothesize a constraint C which gives negative scores. If there is a number n which is the smallest score that C ever awards, then C is just a notational variant of an orthodox constraint C', where C'/a/[b] = C/a/[b] - n. Replacing C with C' does not change the output of any constraint-hierarchy grammar containing C. If allowing negative scores is to have any detectable effect, we must allow arbitrarily large negative scores. But then we are no longer guaranteed that Lemma (8) will hold; it is possible to construct sets of score vectors with no smallest element. The "detectable effect" is to make G possibly uncomputable. Hence, we can get along without negative-valued constraints. To forbid constraints which give arbitrarily large positive scores would certainly simplify the definition of the ≤-relation on score vectors, since then, as P&S (p. 200) point out, there would be a simple order-preserving isomorphism between score vectors and the natural numbers. However, any such constraint-hierarchy grammar would just be a special case of the class of grammars described here, so the only effect on our arguments in this paper would be to make the proof of Lemma (8) trivial.

Footnote 3. I'm ignoring the possibility that there might be infinitely many constraints, because it complicates matters and is not linguistically plausible.

If it is not true that v < v', we say v ≥ v'. If v < v' or v = v', we say v ≤ v'. The < relation is transitive and irreflexive, and any two distinct score vectors of the same length are comparable, so it is a strict total ordering.

(8) Lemma. Let V be a nonempty set of score vectors of length n. Then V contains a minimal element; i.e., ∃ v0 ∈ V such that ∀ v ∈ V, we have v0 ≤ v.[4]

Proof. Let m1 be the smallest number occurring as the first coordinate of any vector in V (it exists because N is well-ordered), and let V1 = {v ∈ V | r1 = m1}, which is nonempty. Within V1, let m2 be the smallest second coordinate and let V2 = {v ∈ V1 | r2 = m2}; continue in the same way through the n-th coordinate. The result is a nonempty set Vn whose sole element is v0 = (m1, ..., mn). Now let v ∈ V. If v = v0 we are done; otherwise let k be the first coordinate at which v and v0 differ. Since v agrees with v0 on coordinates 1, ..., k-1, we have v ∈ Vk-1, so rk ≥ mk by the choice of mk, and since rk ≠ mk, in fact rk > mk. Hence v0 < v, and v0 is minimal in V. QED.

(9) Defn. Let C = (C1, ..., Cn) be a constraint hierarchy over A × B, a ∈ A, b ∈ B. We define C/a/[b] to be the score vector (C1/a/[b], ..., Cn/a/[b]), and say that C/a/[b] is the score awarded by C to the candidate b relative to the input a.

(10) Defn. Let G be a 3-tuple (A × B, Gen, C) such that
(i) A × B is a countable set;
(ii) Gen: A → 2^B is such that ∀ a ∈ A, Gen(a) ≠ ∅;
(iii) C is a constraint hierarchy over A × B.
For any a ∈ A we define
G(a) = {b0 ∈ Gen(a) | C/a/[b0] ≤ C/a/[b] for all b ∈ Gen(a)}
Then G is said to be a constraint-hierarchy grammar.
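Lemma (8) is easy to check mechanically. A minimal sketch in Python (the score vectors below are invented for illustration): tuples of non-negative integers compare lexicographically, which is exactly the < relation of Defn (7), so the built-in min() returns the minimal element whose existence the lemma guarantees.

```python
def minimal(vectors):
    """Return the minimal score vector of a nonempty finite set (Lemma 8).
    Python tuple comparison is lexicographic, matching Defn (7)."""
    return min(vectors)

# Illustrative score vectors for three hypothetical candidates:
V = {(0, 3, 1), (0, 2, 5), (1, 0, 0)}

# (0, 2, 5) < (0, 3, 1): they first differ at the second coordinate, 2 < 3.
assert (0, 2, 5) < (0, 3, 1)
assert minimal(V) == (0, 2, 5)
```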
The element a is called the input, the set Gen(a) is the candidate set, and the set G(a) is the output.

(11) Lemma. If G = (A × B, Gen, C) is a constraint-hierarchy grammar, then ∀ a ∈ A, G(a) ≠ ∅.

Proof. By (ii) of (10) above, Gen(a) ≠ ∅. Hence we can apply Lemma (8) to the set V = {C/a/[b] | b ∈ Gen(a)}. This set contains a minimal element v0, and G(a) is simply the (obviously nonempty) subset of Gen(a) whose elements receive a score of v0 relative to a. QED.

Footnote 4. This lemma is essential for the success of P&S's H-eval; without it there is no guarantee that the algorithm will terminate.

Constraint-hierarchy grammars are, as a class, uninteresting:

(12) Lemma. Let A, B be any countable sets, and let f: A → B be any function defined on all of A. Then there exists a constraint-hierarchy grammar G = (A × B, Gen, C) that computes f.

Proof. There are at least two equally trivial ways to construct such a grammar.
(i) (Trivial Gen) For any a ∈ A, let Gen(a) = B. Let C = (C1), where for any a ∈ A, b ∈ B, C1 is given by
C1/a/[b] = 0 iff b = f(a)
C1/a/[b] = 1 otherwise.
Then obviously ∀ a ∈ A, G(a) = {f(a)}.
(ii) (Trivial C) For any a ∈ A, let Gen(a) = {f(a)}. Let C = (C1), where C1 is the trivial constraint such that for any a ∈ A, b ∈ B, C1/a/[b] = 0. Then obviously ∀ a ∈ A, G(a) = {f(a)}.

The claim that every natural-language grammar can be computed by some constraint-hierarchy grammar is therefore an empty one. We already know it to be true, since it rules nothing out. Definition (10) provides a framework or notational device, rather than a theory of language. If we want falsifiable predictions, we will have to constrain Gen() and C to prevent the tricks used in (12 i) and (12 ii). The next section of this paper investigates a widely used model which does so constrain them, and which consequently cannot compute certain functions.

2.3. The "classical OT" model

The model we are going to look at is a variant of the standard version of Optimality Theory, as presented in, e.g., Prince & Smolensky (1993), and as commonly practiced in OT work, such as that collected in Beckman, Walsh Dickey, and Urbanczyk (1995) and the Rutgers Optimality Archive (http://ruccs.rutgers.edu/roa.html). This framework, which we will call "classical OT", makes two assumptions which disallow the tricks used in (12). One is that for any input /a/, [a] ∈ Gen(/a/); that is, the candidate set always contains a fully faithful candidate. This forestalls (12 ii), the trivial-C trick, since Gen() can no longer be restricted to the lone candidate [f(a)]. The other is that there are only two types of constraint: faithfulness constraints, which favor candidates that are like the input over those that differ from it, and markedness constraints, which favor candidates that have (lack) some configuration over those that lack (have) it. Intuitively, markedness constraints represent the tendency of a grammar to prefer certain surface forms over others, while faithfulness constraints represent the tendency to keep the output like the input. This disposes of (12 i), the trivial-Gen() trick, since the single constraint needed to make that trick work will not necessarily belong to either of these categories. The first of these assumptions is standard (see, e.g., P&S p. 80); the second, while implicit in much OT work (e.g., Kirchner 1995), has not to my knowledge been explicitly and seriously proposed, and with good reason, as we shall see.

It will be necessary to add one more hypothesis, homogeneity, which asserts that input and output representations are made out of the same structural elements. This is not a standard OT assumption, nor are we proposing it as an axiom about natural language. Rather, it will be used in the theoretical argument to prove a theorem, and in the empirical argument to single out the class of real-world processes to which we expect the theorem to apply.
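The evaluation procedure of Defn (10), together with both trivial constructions from Lemma (12), can be sketched in a few lines of Python. Everything here is invented for illustration (the toy sets, the function f, and the helper names are not from the paper): evaluate() keeps the candidates whose score vector is lexicographically minimal, and the two one-constraint grammars both compute f.

```python
def evaluate(a, gen, constraints):
    """G(a) per Defn (10): the candidates in Gen(a) whose score vector
    C/a/[b] = (C1/a/[b], ..., Cn/a/[b]) is lexicographically minimal."""
    candidates = gen(a)
    score = lambda b: tuple(c(a, b) for c in constraints)
    best = min(score(b) for b in candidates)
    return {b for b in candidates if score(b) == best}

# A toy function over {1, 2, 3}, invented for illustration:
f = {1: 2, 2: 2, 3: 1}.get
B = {1, 2, 3}

# (12 i) Trivial Gen: Gen emits everything; one "smart" constraint
# awards 0 only to f(a).
g_trivial_gen = lambda a: evaluate(a, lambda _: B,
                                   [lambda a, b: 0 if b == f(a) else 1])

# (12 ii) Trivial C: Gen emits only f(a); the lone constraint is constant.
g_trivial_c = lambda a: evaluate(a, lambda a: {f(a)},
                                 [lambda a, b: 0])

# Both grammars compute f, confirming Lemma (12):
assert all(g_trivial_gen(a) == {f(a)} == g_trivial_c(a) for a in B)
```

Note how each trick neutralizes one half of the evaluation: in (12 i) the candidate set does no work, and in (12 ii) the constraint does none.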
In the following subsections (2.3.1–2.3.5), we formalize the classical OT assumptions and briefly discuss their interpretation. The next section, Section 3, will explore the computational power of classical OT grammars, and show how its hypotheses restrict the class of computable functions.

2.3.1. Gen() and the fully faithful candidate

This is an uncontroversial assumption[5], formalized in (13):

(13) Defn. If for some constraint-hierarchy grammar (A × B, Gen, C) we have ∀ a ∈ A, [a] ∈ Gen(/a/), then Gen() is said to be inclusive.

Classical OT grammars have inclusive Gen() functions; P&S (p. 80) even propose a version in which Gen(/a/) = A for all a ∈ A. Inclusivity implies that A ⊆ B. This will be taken up again when homogeneity is discussed in Section 2.3.4.

2.3.2. Markedness constraints

A markedness constraint is one that ignores the input and looks only at the candidate:

(14) Defn. If C is a constraint over A × B such that ∀ a, a' ∈ A, C/a/ = C/a'/, then C is a markedness constraint.[6]

Markedness constraints are therefore those that penalize ill-formed surface structures. Examples include:

(15) Ex. The Onset Constraint (ONS). Syllables must have onsets (P&S, p. 16).
(16) Ex. *PL/Lab. Don't have [+labial] Place (P&S, p. 181).
(17) Ex. OCP [Obligatory Contour Principle]. Adjacent identical elements are prohibited (adapted from Urbanczyk 1995, originally due to Leben and to Goldsmith).
(18) Ex. CODA-COND. A [Place] node linked to a coda position must also be linked to something else (Itô 1986).

Since the input is irrelevant to a markedness constraint, we can omit the first argument and write simply "C[b]".

2.3.3. Faithfulness constraints

A faithfulness constraint is one which always gives a perfect score to the fully faithful candidate.

(19) Defn. If C is a constraint over A × B such that ∀ a ∈ A, C/a/[a] = 0, then C is a faithfulness constraint.
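Over a finite toy domain, definitions (14) and (19) can be checked mechanically. A sketch, with strings standing in for representations; the two toy constraints below are crude invented analogues of real ones (an onset-like markedness constraint and an identity-like faithfulness constraint), not the paper's definitions of them.

```python
A = B = ["pa", "ap", "pap"]  # invented toy domain of "representations"

# Markedness-style constraint, loosely modeled on ONS: penalize a
# candidate that begins with a vowel, ignoring the input entirely.
ons = lambda a, b: 1 if b.startswith("a") else 0

# Faithfulness-style constraint, loosely modeled on IDENT-IO: count
# positions where input and candidate differ, plus any length mismatch.
ident = lambda a, b: sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))

def is_markedness(c):
    """Defn (14): the score never depends on the input argument."""
    return all(c(a, b) == c(a2, b) for a in A for a2 in A for b in B)

def is_faithfulness(c):
    """Defn (19): the fully faithful candidate always scores 0."""
    return all(c(a, a) == 0 for a in A)

assert is_markedness(ons) and not is_markedness(ident)
assert is_faithfulness(ident) and not is_faithfulness(ons)
```

Real constraints such as the examples in the text work the same way, just over richer representations than strings.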
Familiar examples include:[7]

(20) Ex. DEP-IO. Every segment of the output has a correspondent in the input [discourages epenthesis] (McCarthy & Prince 1995).
(21) Ex. MAX-IO. Every segment of the input has a correspondent in the output [discourages deletion] (McCarthy & Prince 1995).
(22) Ex. IDENT-IO(nas). If an input segment and an output segment are in correspondence, they have the same value of [nasal] (McCarthy & Prince 1995).

Footnote 5. McCarthy & Prince (1995) represent the underlying form of the reduplicative morpheme as an abstract symbol RED, which cannot surface at all since it is not of the right representational type to appear in outputs.

Footnote 6. The constraint C is of course really a two-place function; however, C/a/ is a one-place function obtained by saturating the first argument of C.

Footnote 7. Surprisingly, P&S's constraints FILL and PARSE are not faithfulness constraints at all under this definition, but rather markedness constraints. The P&S Gen() function neither epenthesizes nor deletes segments; it only constructs a prosodic parse of the input; all candidates therefore have the same segments parsed in different ways. FILL penalizes candidates with empty prosodic structure; PARSE penalizes candidates with unparsed segments. Neither one needs to refer to the input in order to assign scores. The fully faithful candidate (which has no prosodic structure at all) will either trivially satisfy or trivially violate either one, depending on how the constraints are formulated, so they may or may not be faithfulness constraints as defined here.

2.3.4. Homogeneity of inputs and outputs

We have saved the most unintuitive assumption for last. It will be both convenient and crucial to assume further that A = B, that is, that the grammar is homogeneous. Convenient, because it allows us to state and prove theorems about constraint interaction much more simply; crucial, because (as we shall see later) it is precisely the homogeneous part of natural grammars that is well-behaved with respect to the theorems we shall derive.

(23) A grammar G: A → 2^B is said to be homogeneous if A = B.

Once we start talking about formal grammars that mimic natural grammars, A will become the set of possible underlying representations, and B the set of possible surface representations. A real linguistic theory defines A and B by enumerating the formal elements used to build these representations (nodes, association lines, etc.) and the rules for putting them together. To claim that A = B is to claim that, in the grammars of present interest, underlying and surface phonological representations are made of the same components, assembled in the same way.

Homogeneity is not a credible condition on OT grammars of natural languages. Right from the very start, OT grammar fragments have assumed the input to contain structures never found in outputs, and vice versa. It is, for example, a common assumption that the input to the "phonological" component of the grammar is heavily annotated with nonphonological information: phonologically empty morphemes (e.g., McCarthy & Prince 1995), morphological constituency (e.g., Beckman 1995), Level 1 vs. Level 2 affix class (e.g., Benua 1996), part of speech and case (e.g., P&S's Lardil), syntactic constituency (Truckenbrodt 1996), and much more. None of these annotations can be changed by Gen(), no output is ever unfaithful to them, and none are in the candidate output set.[8]

Footnote 8. True, it is conventional to represent morpheme boundaries in the candidates. This is just to make them easier to read. With the advent of Correspondence Theory (McCarthy & Prince 1995), there is no reason to mark boundaries in the candidates: since Gen() can't change morphological affiliations, one can tell which morpheme a given surface segment is affiliated to by looking at its correspondent in the input. Furthermore, as John McCarthy (p.c., 1995) pointed out in reference to an earlier draft of this paper, leaving the boundary marks in the candidates leads to absurdity if, say, two segments separated only by a morpheme boundary are metathesized. What happens to the plus juncture then? This half of homogeneity is like "Consistency of Exponence" (McCarthy & Prince 1994), only a bit stronger, since it applies to more than just morphology.

There are also structures commonly supposed to be present in the output but not the input. Phonological phrase boundaries, for instance, seem to be determined entirely by the interaction of constraints (Truckenbrodt 1996). After all, where would the phrasing in the input come from? Not the lexicon, surely. And not the syntax alone either, since phrasing exhibits highly unsyntactic properties like rate- and length-sensitivity (Shih 1986, Du 1988). Moreover, if a phrase boundary somehow made it into the input, Truckenbrodt's constraints would in effect erase it and overwrite it; the output would be faithful only by coincidence.

But most "phonological" representations are in fact present in both input and output: distinctive features, low-level prosodic constituency such as syllables or prosodic words, association lines. These are the elements that turn up in the lexicon, and that can be changed by Gen(). Homogeneous grammars can deal with much of the core business of phonology: assimilation, dissimilation, segmental inventories, phonotactics, syllable structure, phonologically conditioned allophony, anything resulting from the influence of sounds on other sounds.

In a natural grammar, those representational elements that are found in both inputs and outputs we may call homogeneous elements.[9] Those constraints which refer only to homogeneous elements in both arguments we may call homogeneous constraints. Any natural grammar will have nonhomogeneous elements and nonhomogeneous constraints, but it will also have homogeneous elements and homogeneous constraints. This paper will argue that, if we confine our attention to phonological processes involving only homogeneous elements, we will need only conservative constraints to implement them. We will accordingly so confine our attention for the rest of the theoretical discussion (up to the end of Part 3).

Footnote 9. The stock of available representational elements is here assumed to be universal; so is the status of each one as homogeneous or non-homogeneous. A homogeneous element is homogeneous in all languages: Richness of the Base (P&S) makes it available in the input, and Gen() makes it available in the output. If that element never surfaces in a given language, we assume that some constraints conspire to stifle it.

2.3.5. Summary: Classical OT

The model of OT with which we are concerned here is thus the one satisfying the following postulate:

(24) Defn. Let G = (S × S, Gen, C) be a homogeneous constraint-hierarchy grammar. If every Ci is either a markedness constraint or a faithfulness constraint, we say that C is conservative. If, in addition, Gen() is inclusive and G is exact, then we say that G is a classical OT grammar.

3. Computable functions in classical OT grammars

We are now in a position to show that classical OT is a constraining theory; there are functions it cannot compute, and hence phonological phenomena it predicts to be nonexistent. Our argument can be stated informally as follows: the requirement in (24) of a conservative C and inclusive Gen() means that if the output is not identical to the input, it must be less marked than the input. To see why, notice that the inclusive-Gen() requirement means that the candidate set always contains the input, which scores perfectly on all faithfulness constraints.
Since the output can't do better than the input on the faithfulness constraints, it must do better on the markedness constraints. If, therefore, a classical OT grammar sends underlying /A/ to surface [B], then [B] must be less marked than the fully faithful candidate [A]. It follows that the grammar cannot also send underlying /B/ to surface [A], since this would entail that [A] is less marked than the fully faithful candidate [B], and hence that [A] is less marked than itself. We will show more generally that a classical OT grammar cannot compute circular chain shifts (any function that sends /A1/ → [A2], /A2/ → [A3], ..., /An/ → [A1]) or infinite chain shifts (any function that sends /A1/ → [A2], /A2/ → [A3], ... without ever repeating). Furthermore, this is an if-and-only-if theorem: any function that does not give rise to a circular or infinite chain shift can be computed by a classical OT grammar. These statements are formalized and proven in Section 3.1 as the characterization theorem for classical OT grammars, defining precisely what functions classical OT can and cannot compute.

3.1. The characterization theorem

We begin by establishing that the output of a classical OT grammar is either the input, or something less marked than the input.

(25) Defn. Let C be a conservative constraint hierarchy. Define CM and CF to be the constraint hierarchies consisting respectively of the markedness constraints (CM1, ..., CMp) and the faithfulness constraints (CF1, ..., CFq) of C, such that the dominance relations in CM and CF are the same as those in C.

(26) Lemma. Let G = (S × S, Gen, C) be a classical OT grammar, and suppose a, b ∈ S are such that a ≠ b and G(a) = {b}. Then CM[a] > CM[b].

Proof. Since Gen() is inclusive, [a] is among the candidates for /a/; since G(a) = {b} and a ≠ b, it follows that C/a/[a] > C/a/[b]. By definition of faithfulness, though, we have CF/a/[a] = (0, 0, ..., 0) ≤ CF/a/[b]. If it were also true that CM[a] ≤ CM[b], then it would follow that C/a/[a] ≤ C/a/[b], contradicting our hypothesis. Hence CM[a] > CM[b]. QED.
This sharply restricts the set of computable functions:

(27) Defn. Let f: S → S be any function. Let f^n(s) represent f(f(...(s)...)), with f iterated n times; let f^0(s) = s. Suppose that for any s ∈ S, there is a smallest number π(s) such that f^π(s)(s) = f^(π(s)+1)(s). Then we say that f is eventually idempotent, and that π(s) is the potential of s under f.

(28) Thm (characterization theorem for classical OT grammars). Let f: S → S be any function. Then f is computable by a classical OT grammar if and only if f is eventually idempotent.

Proof. (⇒) Suppose f is computable by a classical OT grammar G = (S × S, Gen, C). Let a be any element of S, and consider the sequence A = ({a}, G(a), G2(a), G3(a), ...), where Gk(a) represents G iterated k times on a. (Each Gk(a) is guaranteed to exist by the homogeneity and exactness of G.) For convenience's sake let us write A = ({a0}, {a1}, {a2}, ...). Let M = {CM[a0], CM[a1], ...}. This is a set of score vectors, so by Lemma (8) it has a minimal element CM[aK] for some K. Then CM[aK] ≤ CM[aK+1]. By Lemma (26), this means aK = aK+1, since otherwise we would have CM[aK] > CM[aK+1], contradicting minimality. But ak = f^k(a), so f^K(a) = f^(K+1)(a), making f eventually idempotent. QED.

(⇐) Suppose f is eventually idempotent. We will construct a classical OT grammar G = (S × S, Gen, C) that computes f. Let Gen(a) = S for all a ∈ S, so that Gen() is inclusive. Let C = (C1, C2), where for any a, b ∈ S,
C1/a/[b] = 0 iff b ∈ {a, f(a)}
C1/a/[b] = 1 otherwise
C2[b] = π(b)
where π(b) is the potential of b under f, a number which is guaranteed to exist since f is eventually idempotent. Then G is a classical OT grammar, since C1 is a faithfulness constraint and C2 is a markedness constraint. For any input a ∈ S, the candidates will receive the following score vectors:

Candidate    Score vector
a            (0, π(a))
f(a)         (0, π(f(a)))
others       (1, unknown)

The set containing the candidate with the smallest score vector is G(a). Clearly, if f(a) = a, then G(a) = {a} = {f(a)}. If f(a) ≠ a, then π(f(a)) = π(a) - 1, so (0, π(f(a))) < (0, π(a)), and again G(a) = {f(a)}. QED.
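The (⇐) construction can be run directly: compute the potential π by iterating f, build the faithfulness constraint C1 and the markedness constraint C2, and check that lexicographic evaluation reproduces f. A sketch over an invented eventually idempotent toy function (a finite chain shift 3 → 2 → 1 that bottoms out, plus a fixed point 4):

```python
# Toy eventually idempotent function, invented for illustration.
f = {1: 1, 2: 1, 3: 2, 4: 4}.get
S = {1, 2, 3, 4}

def potential(s):
    """pi(s) of Defn (27): the least n with f^n(s) = f^(n+1)(s)."""
    n = 0
    while f(s) != s:
        s, n = f(s), n + 1
    return n

C1 = lambda a, b: 0 if b in (a, f(a)) else 1   # faithfulness: C1/a/[a] = 0
C2 = lambda a, b: potential(b)                 # markedness: ignores a

def G(a):
    """The two-constraint classical OT grammar of Theorem (28),
    with inclusive Gen(a) = S."""
    score = lambda b: (C1(a, b), C2(a, b))
    best = min(score(b) for b in S)
    return {b for b in S if score(b) == best}

# G computes f, as the theorem's (<=) half claims:
assert all(G(a) == {f(a)} for a in S)
assert (potential(3), potential(2), potential(1)) == (2, 1, 0)
```

For input 3, candidate 2 scores (0, 1), beating both the faithful candidate 3 at (0, 2) and everything else at (1, ...), exactly as in the tableau in the proof.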
(29) Corollary. Suppose G = (S × S, Gen, C) is a classical OT grammar. Then there exists a classical OT grammar[10] G' = (S × S, Gen', C') such that G' computes the same function as G, and C' = (C'1, C'2).

Proof. Done already in the proof of (28): by the (⇒) half, the function G computes is eventually idempotent, and the (⇐) half constructs a two-constraint grammar computing it.

So everything that can be done in classical OT can be done with just one faithfulness constraint and one markedness constraint! This does not mean that classical OT is in any sense trivial. Quantifier order is important here. What Corollary (29) says is that for any eventually idempotent function, there exists a two-constraint classical OT grammar that computes it. What OT claims is that there exists a set of constraints such that for any phonological system found in nature, the constraints can be ranked to give a classical OT grammar that computes it. Two constraints that would handle, say, Berber, could not be reranked to yield any other language, nor are they likely to be at all linguistically insightful or psychologically real. Any list of hypothetical constraints that might plausibly be the universal constraints will be much more interesting than that.

Footnote 10. Not a unique one, though. In the construction of C'1, we can replace "1" with "2" or "1776" or whatever we please, as long as it doesn't equal 0.

3.2. Phonological processes incompatible with classical OT

Classical OT can compute all and only eventually idempotent functions; hence, a natural grammar that was not eventually idempotent would be a clear counterexample. What makes a function f: S → S fail to be eventually idempotent? There are only two possibilities:

Infinite chain shift: This occurs when S contains an element s such that s, f(s), f^2(s), ... are all distinct, as shown schematically in Figure (30).

Circular chain shift: This occurs when S contains an element s such that f(s) ≠ s but f^k(s) = s for some k > 1, as shown schematically in Figure (31).

(30) [Figure: infinite chain shift, schematic]
(31) [Figure: circular chain shift, schematic]
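On a finite domain an infinite chain shift is impossible, so checking eventual idempotence reduces to detecting circular chain shifts. A sketch (the toy functions are invented for illustration; swap is the two-element circular shift /A1/ → [A2], /A2/ → [A1] discussed above):

```python
def eventually_idempotent(f, S):
    """Check Defn (27) on a finite domain: follow each orbit until it
    repeats; f is eventually idempotent iff every orbit ends in a fixed
    point, i.e. there is no cycle of length greater than 1."""
    for s in S:
        seen = []
        while s not in seen:
            seen.append(s)
            s = f(s)
        if f(s) != s:          # orbit closed on a non-fixed point: circular shift
            return False
    return True

# Circular chain shift: 1 -> 2 -> 1, not computable by classical OT.
swap = {1: 2, 2: 1}.get
# A chain shift that bottoms out: 3 -> 2 -> 1 -> 1, computable.
chain = {1: 1, 2: 1, 3: 2}.get

assert not eventually_idempotent(swap, {1, 2})
assert eventually_idempotent(chain, {1, 2, 3})
```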


Publication date: 1999